Recent research on click-through rate (CTR) prediction has reached a new level by modeling longer user behavior sequences. Among others, the two-stage approach is the state-of-the-art (SOTA) solution for industrial applications: it first trains a retrieval model that truncates the long behavior sequence in advance, and then trains the CTR model on the truncated sequence. However, since the retrieval model and the CTR model are trained separately, the sub-sequence retrieved for the CTR model is inaccurate, which degrades the final performance. In this paper, we propose an end-to-end paradigm for modeling long behavior sequences that achieves both superior performance and remarkable cost efficiency compared with existing models. Our contribution is three-fold. First, we propose a hash-based efficient target attention (TA) network named ETA-Net, which enables end-to-end user behavior retrieval based on low-cost bit-wise operations; the proposed ETA-Net reduces the complexity of standard TA by orders of magnitude for sequential data modeling. Second, we propose a general system architecture as a viable solution for deploying ETA-Net in industrial systems; in particular, ETA-Net has been deployed on the recommender system of Taobao and brings a 1.8% lift in CTR and a 3.1% lift in gross merchandise value (GMV) compared with the SOTA two-stage methods. Third, we conduct extensive experiments on offline datasets and online A/B tests; the results verify that the proposed model substantially outperforms existing CTR models in both CTR prediction performance and online cost efficiency. ETA-Net now serves the main traffic of Taobao, handling hundreds of millions of users every day.
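The abstract does not spell out the hashing scheme, but a SimHash-style fingerprint compared by Hamming distance is a plausible minimal reading of "hash-based ... low-cost bit-wise operations". The sketch below is illustrative only; the function names, projection matrix, and sizes are hypothetical, not ETA-Net's actual implementation.

```python
import torch

def simhash(x: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    """Map embeddings to m-bit binary fingerprints via random projections."""
    return (x @ proj > 0).long()  # (..., m) bits in {0, 1}

def topk_by_hamming(target, behaviors, proj, k):
    """Retrieve the k behaviors whose fingerprints are closest to the
    target item's fingerprint in Hamming distance."""
    t_bits = simhash(target, proj)            # (m,)
    b_bits = simhash(behaviors, proj)         # (L, m)
    ham = (b_bits != t_bits).sum(dim=-1)      # Hamming distance per behavior
    idx = ham.topk(k, largest=False).indices  # k most similar behaviors
    return behaviors[idx]

d, m, L, k = 64, 48, 10_000, 128
proj = torch.randn(d, m)          # shared random projection, fixed once
behaviors = torch.randn(L, d)     # long user behavior sequence (embeddings)
target = torch.randn(d)           # candidate item / ad embedding
subseq = topk_by_hamming(target, behaviors, proj, k)  # fed into target attention
```

Because the fingerprints are binary, the distance computation reduces to XOR and popcount in a production system, which is presumably where the claimed order-of-magnitude saving over dot-product attention comes from.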
In complex scenes, especially urban traffic intersections, a deep understanding of the relationships among entities and of their motion behaviors is essential for high-quality planning. We propose D2-TPred, a trajectory prediction method for scenes with traffic lights, which uses a spatial dynamic interaction graph (SDG) and a behavior dependency graph (BDG) to handle the problem of discontinuous dependencies in the spatial-temporal space. Specifically, the SDG captures spatial interactions by reconstructing, in each frame, sub-graphs over the agents whose features are dynamic and changeable. The BDG infers motion tendency by modeling the implicit dependency of the current state on prior behaviors, especially discontinuous motions corresponding to acceleration, deceleration, or changes of steering direction. In addition, we present VTP-TL, a new dataset for vehicle trajectory prediction under traffic lights. Our experimental results show that, compared with other trajectory prediction algorithms, our model achieves improvements of 20.45% and 20.78% in ADE and FDE, respectively. The dataset and code are available at: https://github.com/vtp-tl/d2-tpred.
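As a rough illustration of per-frame dynamic sub-graphs, the sketch below rebuilds an interaction graph at each frame from agent positions; the radius rule and all names are assumptions for illustration, not the SDG's actual construction.

```python
import numpy as np

def frame_interaction_graph(positions: np.ndarray, radius: float) -> np.ndarray:
    """Build a per-frame adjacency matrix: two agents interact in this
    frame only if they are within `radius` of each other (a stand-in for
    the dynamic, per-frame sub-graphs the SDG reconstructs)."""
    diff = positions[:, None, :] - positions[None, :, :]   # (N, N, 2)
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist < radius).astype(np.float32)
    np.fill_diagonal(adj, 0.0)                             # no self edges
    return adj

# one frame with 4 agents at an intersection (positions in metres)
positions = np.array([[0.0, 0.0], [1.5, 0.2], [8.0, 8.0], [1.0, -0.5]])
print(frame_interaction_graph(positions, radius=3.0))
```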
An intelligent traffic light control system (ITLCS) is a typical multi-agent system (MAS) comprising multiple roads and traffic lights, and constructing a MAS model for ITLCS is the basis for alleviating traffic congestion. Existing MAS approaches are mainly based on multi-agent deep reinforcement learning (MADRL). Although the deep neural networks (DNNs) used in MADRL are effective, their training time is long and their parameters are difficult to trace. Recently, the broad learning system (BLS) has provided an alternative way of learning, replacing deep architectures with a flat network. Furthermore, broad reinforcement learning (BRL) has extended BLS to single-agent deep reinforcement learning (SADRL) problems with promising results. However, BRL does not address the complex structures of and interactions among agents. Motivated by the strengths of MADRL and the limitations of BRL, we propose a multi-agent broad reinforcement learning (MABRL) framework to explore the capability of BLS in a MAS. First, unlike most MADRL approaches, which rely on stacks of deep neural networks, we model each agent with a broad network. Then, we introduce a dynamic self-cycling interaction mechanism to determine the "3W" information: when to interact, which information the agents need to consider, and what to transmit. Finally, we conduct experiments on intelligent traffic light control scenarios. We compare MABRL with six other methods, and experimental results on three datasets verify its effectiveness.
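For readers unfamiliar with broad learning systems, the sketch below shows the flat structure the abstract contrasts with DNNs: random feature and enhancement nodes with output weights solved in closed form, which is what makes training fast and the parameters traceable. It is a generic BLS regressor under assumed sizes, not the MABRL agent itself.

```python
import numpy as np

class BroadNet:
    """Minimal broad learning system: random feature and enhancement
    nodes; only the output weights are learned, in closed form."""
    def __init__(self, d_in, n_feat, n_enh, lam=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.Wf = rng.standard_normal((d_in, n_feat))    # feature mapping
        self.We = rng.standard_normal((n_feat, n_enh))   # enhancement mapping
        self.lam = lam
        self.Wo = None

    def _hidden(self, X):
        Z = np.tanh(X @ self.Wf)                 # feature nodes
        H = np.tanh(Z @ self.We)                 # enhancement nodes
        return np.hstack([Z, H])                 # flat (broad) layer

    def fit(self, X, Y):
        A = self._hidden(X)
        # ridge solution: Wo = (A^T A + lam I)^{-1} A^T Y
        self.Wo = np.linalg.solve(A.T @ A + self.lam * np.eye(A.shape[1]), A.T @ Y)

    def predict(self, X):
        return self._hidden(X) @ self.Wo

# e.g. regress Q-values for 4 signal phases from a 16-d traffic state
net = BroadNet(d_in=16, n_feat=64, n_enh=128)
X, Y = np.random.randn(256, 16), np.random.randn(256, 4)
net.fit(X, Y)
print(net.predict(X[:2]).shape)  # (2, 4)
```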
Graph convolutional network (GCN) methods achieve advanced performance on skeleton-based action recognition tasks. However, the skeleton graph cannot fully represent the motion information contained in skeleton data. Moreover, the topology of the skeleton graph in GCN-based methods is set manually according to the natural connections of the body and is fixed for all samples, so it cannot adapt well to different situations. In this work, we propose a novel dynamic hypergraph convolutional network (DHGCN) for skeleton-based action recognition. DHGCN represents the skeleton structure with a hypergraph to effectively exploit the motion information contained in human joints. Each joint in the skeleton hypergraph is dynamically weighted according to its movement, and the hypergraph topology in our model can be dynamically adjusted for different samples according to the relationships between joints. Experimental results show that our model achieves competitive performance on three datasets: Kinetics-Skeleton 400, NTU RGB+D 60, and NTU RGB+D 120.
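A minimal sketch of hypergraph convolution, in its common degree-normalized form, may clarify how hyperedges aggregate joints. The incidence matrix here is random for demonstration, whereas DHGCN would derive it dynamically from joint movement.

```python
import torch

def hypergraph_conv(X, H, Theta):
    """One simplified hypergraph convolution step:
    X' = relu(Dv^{-1} H De^{-1} H^T X Theta),
    where H is the joint-by-hyperedge incidence matrix."""
    Dv = H.sum(dim=1).clamp(min=1.0)    # vertex (joint) degrees
    De = H.sum(dim=0).clamp(min=1.0)    # hyperedge degrees
    M = (H / De) @ H.t() / Dv[:, None]  # normalized joint-to-joint propagation
    return torch.relu(M @ X @ Theta)

J, E, C_in, C_out = 25, 6, 64, 64      # 25 joints, 6 hyperedges (e.g. body parts)
H = (torch.rand(J, E) > 0.7).float()   # random incidence; DHGCN would infer
                                       # this from joint movement instead
X = torch.randn(J, C_in)               # per-joint features for one frame
Theta = torch.randn(C_in, C_out)
out = hypergraph_conv(X, H, Theta)     # (25, 64)
```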
Pedestrian trajectory prediction is an important technology for autonomous driving and has become a research hotspot in recent years. Previous methods mainly rely on the positional relationships among pedestrians to model social interaction, which is clearly insufficient to represent the complex cases that arise in practice. In addition, most existing work introduces the scene interaction module as an independent branch and embeds social interaction features during trajectory generation, instead of performing social and scene interaction simultaneously, which may undermine the rationality of the predicted trajectories. In this paper, we propose a new prediction model named Social Soft Attention Graph Convolutional Network (SSAGCN), which aims to handle social interactions among pedestrians and scene interactions between pedestrians and the environment simultaneously. In detail, when modeling social interaction, we propose a new social soft attention function that fully accounts for multiple interaction factors among pedestrians, and it can distinguish the influence of the pedestrians around an agent according to different factors in various situations. For the physical interaction, we propose a new sequential scene sharing mechanism: the influence of the scene on one agent at each moment can be shared with neighbors through social soft attention, so the influence of the scene is expanded in both the spatial and the temporal dimensions. With the help of these improvements, we obtain socially and physically acceptable predicted trajectories. Experiments on publicly available datasets demonstrate the effectiveness of SSAGCN, which achieves state-of-the-art results.
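To make "soft attention from multiple interaction factors" concrete, the sketch below scores neighbors by distance, relative speed, and how well the neighbor's direction aligns with the agent's velocity, then normalizes with a softmax. The factor set and weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def social_soft_attention(pos, vel, W):
    """Soft attention over neighbours from several interaction factors,
    combined by a weight vector W and normalized with softmax."""
    rel = pos[None, :, :] - pos[:, None, :]              # rel[i, j] = pos_j - pos_i
    dist = rel.norm(dim=-1)                              # pairwise distance
    dvel = (vel[None] - vel[:, None]).norm(dim=-1)       # relative speed
    # alignment of the neighbour's direction with the agent's own velocity
    align = torch.cosine_similarity(rel, vel[:, None, :].expand_as(rel), dim=-1)
    feats = torch.stack([-dist, -dvel, align], dim=-1)   # (N, N, 3)
    scores = feats @ W                                   # (N, N)
    scores.fill_diagonal_(float("-inf"))                 # no self attention
    return torch.softmax(scores, dim=-1)                 # each row sums to 1

N = 5
pos, vel = torch.randn(N, 2), torch.randn(N, 2)
A = social_soft_attention(pos, vel, torch.tensor([1.0, 0.5, 0.5]))
```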
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate distortion patterns across scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the easy-to-hard law of the human learning process. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets, and the experimental results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
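One simple reading of "progressive, easy-to-hard multi-task learning" is a schedule that moves loss weight from an easier auxiliary task (say, coarse quality classification) to the harder regression target over training. The sigmoid schedule and task split below are assumptions, not the PMT module's actual design.

```python
import math

def progressive_weights(epoch: int, total: int, k: float = 10.0):
    """Easy-to-hard schedule: weight shifts from an easier auxiliary task
    to the harder one (score regression) as training progresses."""
    p = epoch / max(total - 1, 1)
    w_hard = 1.0 / (1.0 + math.exp(-k * (p - 0.5)))  # ~0 early, ~1 late
    return 1.0 - w_hard, w_hard

for epoch in (0, 25, 49):
    w_cls, w_reg = progressive_weights(epoch, total=50)
    # total_loss = w_cls * classification_loss + w_reg * regression_loss
    print(f"epoch {epoch}: w_cls={w_cls:.2f}, w_reg={w_reg:.2f}")
```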
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a class of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
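Independently of KoopmanLab's own API (which is not reproduced here), the core idea of a Koopman neural operator can be sketched in a few lines: lift the field into a latent space, advance it with a learned linear operator, then decode. Long-horizon prediction is repeated application of the linear operator.

```python
import torch
import torch.nn as nn

class TinyKoopman(nn.Module):
    """Generic Koopman-style surrogate (not the KoopmanLab API): encode,
    evolve with a learned *linear* operator K, decode."""
    def __init__(self, n_grid=64, d_latent=32):
        super().__init__()
        self.enc = nn.Linear(n_grid, d_latent)
        self.K = nn.Linear(d_latent, d_latent, bias=False)  # Koopman operator
        self.dec = nn.Linear(d_latent, n_grid)

    def forward(self, u0, steps=1):
        z = self.enc(u0)
        for _ in range(steps):
            z = self.K(z)          # linear dynamics in the lifted space
        return self.dec(z)

model = TinyKoopman()
u0 = torch.randn(8, 64)            # batch of 1-D fields on a 64-point grid
u10 = model(u0, steps=10)          # roll the dynamics 10 steps forward
```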
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
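The style-aware adaptation can be illustrated by letting the style code rescale the hidden units of a feed-forward layer; the modulation form below is an assumption in the spirit of the description, not StyleTalk's exact mechanism.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFFN(nn.Module):
    """Sketch of a style-aware feed-forward layer: the style code predicts
    per-channel scales that modulate the hidden units, so one decoder can
    render different speaking styles."""
    def __init__(self, d_model=256, d_ff=1024, d_style=128):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_ff)
        self.w2 = nn.Linear(d_ff, d_model)
        self.to_scale = nn.Linear(d_style, d_ff)  # style code -> channel scales

    def forward(self, x, style):
        scale = 1.0 + torch.tanh(self.to_scale(style))  # (B, d_ff), near 1
        h = torch.relu(self.w1(x)) * scale.unsqueeze(1) # modulate hidden units
        return self.w2(h)

ffn = StyleAdaptiveFFN()
x = torch.randn(2, 50, 256)        # (batch, frames, features) of speech content
style = torch.randn(2, 128)        # style code from the reference video
out = ffn(x, style)                # (2, 50, 256)
```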
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
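For a VP-type SDE, the ODE-based sampling the abstract mentions reduces to integrating a deterministic drift that combines the forward drift with the learned score. The toy sketch below uses a dense tensor and an analytic score as stand-ins for the graph states and the hybrid noise-prediction model.

```python
import torch

def ode_sample(score_fn, x, n_steps=50, beta=1.0):
    """Euler integration of the probability-flow ODE for a VP-type SDE:
        dx = [-0.5*beta*x - 0.5*beta*score(x, t)] dt,
    run backwards from t=1 (noise) to t=0 (data)."""
    dt = -1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 + i * dt
        drift = -0.5 * beta * x - 0.5 * beta * score_fn(x, t)
        x = x + drift * dt
    return x

# stand-in score: exact score of a standard normal, so the flow is stationary
score_fn = lambda x, t: -x
x0 = ode_sample(score_fn, torch.randn(4, 16))
```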
Despite some successful applications of goal-driven navigation, existing deep reinforcement learning-based approaches notoriously suffer from poor data efficiency. One reason is that the goal information is decoupled from the perception module and introduced directly as a condition of decision-making, so the goal-irrelevant features of the scene representation play an adversarial role during the learning process. In light of this, we present a novel Goal-guided Transformer-enabled reinforcement learning (GTRL) approach that takes the physical goal states as an input of the scene encoder, guiding the scene representation to couple with the goal information and realizing efficient autonomous navigation. More specifically, we propose a novel variant of the Vision Transformer as the backbone of the perception system, namely the Goal-guided Transformer (GoT), and pre-train it with expert priors to boost data efficiency. Subsequently, a reinforcement learning algorithm is instantiated for the decision-making system, taking the goal-oriented scene representation from the GoT as input and generating decision commands. As a result, our approach encourages the scene representation to concentrate mainly on goal-relevant features, which substantially enhances the data efficiency of the DRL learning process, leading to superior navigation performance. Both simulation and real-world experimental results demonstrate the superiority of our approach in terms of data efficiency, performance, robustness, and sim-to-real generalization, compared with other state-of-the-art baselines. Demonstration videos are available at https://youtu.be/93LGlGvaN0c.
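A minimal way to read "goal states as an input of the scene encoder" is to prepend a projected goal token to the patch tokens of a transformer, letting self-attention couple the scene features with the goal. The token form and the use of its output as the scene code are assumptions; the paper may implement this differently.

```python
import torch
import torch.nn as nn

class GoalGuidedEncoder(nn.Module):
    """Sketch of the goal-guided idea: project the physical goal state to
    a token, prepend it to the patch tokens, and let self-attention couple
    the scene representation with the goal."""
    def __init__(self, n_patch=64, d_patch=192, d_goal=3, d_model=192):
        super().__init__()
        self.patch_proj = nn.Linear(d_patch, d_model)
        self.goal_proj = nn.Linear(d_goal, d_model)   # goal state -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, patches, goal):
        tokens = torch.cat([self.goal_proj(goal).unsqueeze(1),
                            self.patch_proj(patches)], dim=1)
        return self.encoder(tokens)[:, 0]             # goal token as scene code

enc = GoalGuidedEncoder()
patches = torch.randn(2, 64, 192)   # flattened image patches
goal = torch.randn(2, 3)            # relative goal (x, y, heading)
scene_code = enc(patches, goal)     # (2, 192) goal-coupled representation
```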